Remember [Day29] 爬蟲實戰演練 - iThome文章標題, where we pulled down the article titles? You may have noticed it only captured the titles from the first page of my articles, even though I clearly have tons of quality posts (so I claim). That's because the URL we gave it was just the first page's URL, so of course the program won't fetch anything beyond it. So how do we scrape them all? We combine the scraper with the Selenium package introduced two posts ago and simulate a person clicking "next page".
To start, import the packages we need, open ChromeDriver, and load the page with the driver's get(); BeautifulSoup then parses the page source.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from bs4 import BeautifulSoup
s = Service(executable_path=r'./chromedriver')   # path to your ChromeDriver executable
driver = webdriver.Chrome(service=s)
driver.get("https://ithelp.ithome.com.tw/users/20140998/ironman/4362?page=1")   # page 1 of the series
Next, locate where the information we want sits in the HTML; this is written the same way as before.
html = BeautifulSoup(driver.page_source, "html.parser")
elements = html.find_all("div", {"class": "qa-list"})
for element in elements:
    title = element.find("a", {"class": "qa-list__title-link"}).getText().strip()
    print(title)
    print("-" * 30)
Now for the dynamic part: after scraping page 1 I still have to move on to page 2, then page 3, and so on, so we add a loop. The line

page_next = driver.find_elements(By.XPATH, "//div[@class='profile-pagination']//li/a")[-1]

lets the program find where the "next page" button is. Here I use By.XPATH to locate the pagination buttons; because there are many page options at the bottom, //li/a matches every one of those buttons, but the one I want is "next page", which happens to be the last button of them all, so [-1] grabs it right away!
from selenium.webdriver.common.by import By
import time

for page in range(1, 5):  # pages 1 to 4
    html = BeautifulSoup(driver.page_source, "html.parser")
    elements = html.find_all("div", {"class": "qa-list"})
    print("-" * 10, "Page", page, "-" * 10)
    for element in elements:
        title = element.find("a", {"class": "qa-list__title-link"}).getText().strip()
        print(title)
        print("-" * 30)
    page_next = driver.find_elements(By.XPATH, "//div[@class='profile-pagination']//li/a")[-1]
    page_next.click()  # click the "next page" button
    time.sleep(1)      # pause 1 second so the next page can load
Putting it all together, the complete script looks like this:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import time

s = Service(executable_path=r'./chromedriver')
driver = webdriver.Chrome(service=s)
driver.get("https://ithelp.ithome.com.tw/users/20140998/ironman/4362?page=1")

for page in range(1, 5):  # pages 1 to 4
    html = BeautifulSoup(driver.page_source, "html.parser")
    elements = html.find_all("div", {"class": "qa-list"})
    print("-" * 10, "Page", page, "-" * 10)
    for element in elements:
        title = element.find("a", {"class": "qa-list__title-link"}).getText().strip()
        print(title)
        print("-" * 30)
    page_next = driver.find_elements(By.XPATH, "//div[@class='profile-pagination']//li/a")[-1]
    page_next.click()  # click the "next page" button
    time.sleep(1)      # pause 1 second so the next page can load
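By the way, time.sleep(1) is just a fixed pause, so on a slow connection the next page may not have finished loading. A minimal alternative sketch (my addition, not part of the original script) is to wait explicitly until the old list goes stale and the new one appears, the same WebDriverWait idea the 2025 update below relies on:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

old_list = driver.find_element(By.CSS_SELECTOR, "div.qa-list")   # any element from the current page
page_next.click()                                                # click "next page" as before
wait = WebDriverWait(driver, 10)
wait.until(EC.staleness_of(old_list))                            # old page has been torn down
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.qa-list")))  # new list is rendered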
And that wraps up this series! I hope these 30 posts help you use a scraper to quickly get the data you want.
If you finish them and still have time on your hands, feel free to follow my other Ironman series; it started later, so it will keep updating over the next few days. Wish me a smooth finish (?), and happy coding, everyone!
2025/09 update
Years later, I've published several more Ironman series, and every now and then I want to record the view counts of all my posts. Tallying them by hand is truly exhausting (yes, I have actually done that), so I rewrote the scraper to grab each article's title, Like count, comment count, and view count, both to keep my own records and to share with friends who need the same thing!
# Requirement: pip install selenium
# Note: Selenium 4+ ships with Selenium Manager, so manually downloading chromedriver is usually unnecessary.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import csv, time, re

USER_ID = "20140998"   # replace with your own user ID
SERIES_ID = "4362"     # Ironman series ID
PAGES = 4              # number of pages to scrape

def digits(s: str) -> str:
    # Pull the first number out of a label and drop thousands separators,
    # e.g. "1,234" -> "1234"; returns "0" if no number is found.
    m = re.search(r"\d[\d,]*", s or "")
    return m.group(0).replace(",", "") if m else "0"

def text(el) -> str:
    return (el.get_attribute("innerText") or "").strip()

opts = webdriver.ChromeOptions()
# Uncomment the next line if you want headless mode
# opts.add_argument("--headless=new")
opts.add_argument("--disable-blink-features=AutomationControlled")
opts.add_argument("--lang=zh-TW")
opts.add_argument("--user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128 Safari/537.36")
driver = webdriver.Chrome(options=opts)
wait = WebDriverWait(driver, 25)

BASE = f"https://ithelp.ithome.com.tw/users/{USER_ID}/ironman/{SERIES_ID}"

def read_counts_from_row(row):
    # On the list page, the stats live inside the profile-list__condition container
    boxes = row.find_elements(By.CSS_SELECTOR, "div.profile-list__condition a.qa-condition")
    like = comment = view = "0"
    for b in boxes:
        # Some sites only fill in the numbers once the element scrolls into view
        driver.execute_script("arguments[0].scrollIntoView({block:'center'});", b)
        label = text(b.find_element(By.CSS_SELECTOR, ".qa-condition__text"))   # Like / 留言 / 瀏覽
        count = digits(text(b.find_element(By.CSS_SELECTOR, ".qa-condition__count")))
        if "Like" in label:
            like = count
        elif "留言" in label:      # "留言" (comments) must match the site's label text
            comment = count
        elif "瀏覽" in label:      # "瀏覽" (views) must match the site's label text
            view = count
    return like, comment, view

rows_out = []
try:
    for page in range(1, PAGES + 1):
        driver.get(f"{BASE}?page={page}")
        wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.qa-list")))
        print(f"---------- Page {page} ----------")
        rows = driver.find_elements(By.CSS_SELECTOR, "div.qa-list")
        for row in rows:
            # Scroll to each row to avoid lazy rendering
            driver.execute_script("arguments[0].scrollIntoView({block:'center'});", row)
            title = text(row.find_element(By.CSS_SELECTOR, "a.qa-list__title-link"))
            like, comment, view = read_counts_from_row(row)
            print(title)
            print(f"Like: {like} / Comments: {comment} / Views: {view}")
            print("-" * 30)
            rows_out.append([title, like, comment, view])
        time.sleep(0.3)
finally:
    driver.quit()

# Save to CSV
with open("ithome_articles_counts.csv", "w", newline="", encoding="utf-8-sig") as f:
    w = csv.writer(f)
    w.writerow(["Title", "Like", "Comments", "Views"])
    w.writerows(rows_out)
print("Saved: ithome_articles_counts.csv")
BTW, since I have several series, there is also another version that scrapes articles from multiple Ironman series in one go; that code lives here: 鐵人賽爬蟲
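If you just want the general idea without opening the link, a minimal sketch might look like the following. It reuses the driver, wait, text, and read_counts_from_row from the script above (so it has to run before driver.quit()), and SERIES is a hypothetical dict mapping a label of your choice to each series ID; the actual multi-series version in the linked repo may be organized differently.

# Minimal sketch (assumption): loop the same per-page logic over several series IDs.
SERIES = {             # hypothetical labels -> series IDs; fill in your own
    "2022": "4362",
}

all_rows = []
for label, series_id in SERIES.items():
    base = f"https://ithelp.ithome.com.tw/users/{USER_ID}/ironman/{series_id}"
    for page in range(1, PAGES + 1):   # assumes every series has the same page count
        driver.get(f"{base}?page={page}")
        wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.qa-list")))
        for row in driver.find_elements(By.CSS_SELECTOR, "div.qa-list"):
            title = text(row.find_element(By.CSS_SELECTOR, "a.qa-list__title-link"))
            like, comment, view = read_counts_from_row(row)
            all_rows.append([label, title, like, comment, view])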